
Multi-View Deep Learning for Consistent Semantic Mapping with RGB-D Cameras


Abstract

Visual scene understanding is an important capability that enables robots to purposefully act in their environment. In this paper, we propose a novel approach to object-class segmentation from multiple RGB-D views using deep learning. We train a deep neural network to predict object-class semantics that is consistent from several viewpoints in a semi-supervised way. At test time, the semantics predictions of our network can be fused more consistently in semantic keyframe maps than predictions of a network trained on individual views. We base our network architecture on a recent single-view deep learning approach to RGB and depth fusion for semantic object-class segmentation and enhance it with multi-scale loss minimization. We obtain the camera trajectory using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth annotated frames in order to enforce multi-view consistency during training. At test time, predictions from multiple views are fused into keyframes. We propose and analyze several methods for enforcing multi-view consistency during training and testing. We evaluate the benefit of multi-view consistency training and demonstrate that pooling of deep features and fusion over multiple views outperforms single-view baselines on the NYUDv2 benchmark for semantic segmentation. Our end-to-end trained network achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion.
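The warping step described in the abstract can be illustrated with a minimal NumPy sketch: each pixel of a source view is back-projected to 3D using its depth, transformed into the keyframe by the relative pose obtained from the SLAM trajectory, re-projected into the keyframe image plane, and its class scores accumulated there. All names here (`warp_scores_to_keyframe`, `T_kf_src`) and the simple score-averaging fusion rule are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map (H, W) to 3D points (H*W, 3) in the camera frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def warp_scores_to_keyframe(scores, depth, K, T_kf_src, kf_shape):
    """Warp per-pixel class scores (H, W, C) from a source view into a keyframe.

    T_kf_src is the 4x4 rigid transform mapping source-camera points into the
    keyframe camera frame (e.g., taken from an RGB-D SLAM trajectory). Returns
    accumulated scores and a per-pixel hit count so that several source views
    can be averaged into one keyframe map.
    """
    H, W, C = scores.shape
    pts = backproject(depth, K)                                 # source frame
    pts_h = np.concatenate([pts, np.ones((H * W, 1))], axis=1)  # homogeneous
    pts_kf = (T_kf_src @ pts_h.T).T[:, :3]                      # keyframe frame
    valid = pts_kf[:, 2] > 1e-6                                 # in front of camera
    uv = (K @ pts_kf[valid].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    Hk, Wk = kf_shape
    inside = (u >= 0) & (u < Wk) & (v >= 0) & (v < Hk)
    src = scores.reshape(-1, C)[valid][inside]
    acc = np.zeros((Hk, Wk, C))
    cnt = np.zeros((Hk, Wk, 1))
    np.add.at(acc, (v[inside], u[inside]), src)  # unbuffered scatter-add
    np.add.at(cnt, (v[inside], u[inside]), 1.0)
    return acc, cnt
```

Fusing several views then amounts to summing their `acc` and `cnt` results and taking the per-pixel argmax of `acc / np.maximum(cnt, 1)`; occlusion handling (a depth test in the keyframe) is omitted here for brevity.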
